32 research outputs found

    Visualization and (Mis)Perceptions in Virtual Reality

    Virtual Reality (VR) technologies are now widely adopted in areas as diverse as surgical and military training, architectural design, driving and flight simulation, psychotherapy, and gaming/entertainment. A large range of visual displays, from desktop monitors and head-mounted displays (HMDs) to large projection systems, is currently in use, and each display technology offers unique advantages as well as disadvantages. In addition to the technical considerations involved in choosing a VR interface, it is also critical to consider the perceptual and psychophysical factors associated with visual displays. It is now widely recognized that perceptual judgments of particular spatial properties differ between VR and the real world. In this paper, we provide a brief overview of what is currently known about the kinds of perceptual errors that can be observed in virtual environments (VEs). We then outline the advantages and disadvantages of particular visual displays by focusing on the perceptual and behavioral constraints that are relevant for each. Overall, the main objective of this paper is to highlight the importance of understanding perceptual issues when evaluating different types of visual simulation in VEs.

    Perception and prediction of simple object interactions

    For humans, it is useful to be able to visually detect an object's physical properties. One potentially important source of information is the way the object moves and interacts with other objects in the environment. Here, we use computer simulations of a virtual ball bouncing on a horizontal plane to study the correspondence between our ability to estimate the ball's elasticity and to predict its future path. Three experiments were conducted to address (1) perception of the ball's elasticity, (2) interaction with the ball, and (3) prediction of its trajectory. The results suggest that different strategies and information sources are used for passive perception versus actively predicting future behavior.
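
    The elasticity-from-motion relationship that such a simulation exploits can be sketched in a few lines (an illustrative model with hypothetical parameters, not the authors' stimulus code): modelling elasticity as a coefficient of restitution e, each bounce retains a fraction e of the impact speed, so successive bounce heights scale by e squared.

```python
# Illustrative bouncing-ball model (hypothetical parameters, not the
# authors' actual stimulus code). Elasticity is a coefficient of
# restitution e: impact speed v becomes e*v after the bounce, and since
# peak height is proportional to v**2, heights scale by e**2 per bounce.

G = 9.81  # gravitational acceleration, m/s^2 (unused here, for context)

def bounce_heights(h0, e, n):
    """Peak heights of the first n bounces of a ball dropped from h0."""
    heights = []
    h = h0
    for _ in range(n):
        h = h * e ** 2  # each bounce keeps a fraction e**2 of the height
        heights.append(h)
    return heights

def restitution_from_heights(h_before, h_after):
    """Recover the elasticity e from two successive peak heights."""
    return (h_after / h_before) ** 0.5
```

    For example, a ball dropped from 1 m with e = 0.8 rebounds to 0.64 m; conversely, two successive peak heights suffice to recover e, which is the kind of cue an observer could in principle use.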

    The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.
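
    As a back-of-the-envelope check of the database's size, the design factors listed above imply the following sequence count (assuming a fully factorial recording, which the abstract does not explicitly state):

```python
# Design factors taken from the abstract; treating them as a full
# factorial design is an assumption, not a guarantee of the database.
EXPRESSIONS = 55      # distinct facial expressions
ACTORS = 19           # German participants
REPETITIONS = 3       # repetitions per expression
INTENSITIES = 2       # intensity levels
CAMERA_ANGLES = 3     # camera viewpoints

total_sequences = EXPRESSIONS * ACTORS * REPETITIONS * INTENSITIES * CAMERA_ANGLES
```

    Under these assumptions the dynamic version would comprise 18,810 video sequences.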

    Image processing device and associated operating method

    The invention relates to an image processing device (1, 48, 51) including: several image signal inputs (2-9) for receiving a respective image input signal, the signals being unsynchronized; at least one image signal output (23-26) for emitting at least one image output signal; a combiner (22) for combining the different image input signals to form the image output signal; several synchronizers (14-21), which are respectively connected downstream of the image signal inputs (2-9) and which synchronize the unsynchronized image input signals; and several distorters or rectifiers for distorting or rectifying the individual image input signals before they are combined to form the image output signal. According to the invention, the distorters or rectifiers are formed by the individual synchronizers (14-21) and the image input signals are distorted or rectified independently of one another by one or more synchronizers (14-21). The invention also relates to an associated operating method.
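
    The claimed architecture, per-input synchronizers that also perform the distortion or rectification, followed by a combiner, can be sketched as follows (hypothetical frame representation and warp functions, not the patented implementation):

```python
# Architectural sketch of the claimed pipeline (hypothetical data model:
# a "frame" is a flat list of pixel values; real hardware would differ).
# Each unsynchronized input is synchronized AND independently warped by
# its own synchronizer, then all streams are combined into one output.

def make_synchronizer(warp):
    """Synchronizer that also applies a per-input warp (distort/rectify)."""
    def sync(buffered_frames):
        latest = buffered_frames[-1]      # resample: take the newest frame
        return [warp(px) for px in latest]
    return sync

def combine(synced_streams):
    """Combine synchronized inputs, here by averaging corresponding pixels."""
    return [sum(px) / len(px) for px in zip(*synced_streams)]
```

    The key point of the claim is visible in the structure: the warp lives inside each synchronizer, so every input can be rectified independently before combination.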

    Are recognition deficits following occipital lobe TMS explained by raised detection thresholds?

    It is known that transcranial magnetic stimulation (TMS) administered over the occipital pole suppresses recognition of visual objects. Our aim was to ascertain whether this suppression can be interpreted as a change in visual contrast threshold. Four subjects detected the orientation of a U-shaped hook flashed for 21 ms. Under control conditions, the mean contrast threshold was 0.88 log units Weber contrast. Thresholds were raised if TMS was applied 40-200 ms after the visual stimulus, with a maximum elevation of 1.67 log units at a stimulus onset asynchrony of 120 ms. This phenomenon can be interpreted as a reduction in the signal-to-noise ratio of the visual stimuli by TMS, which can be compensated for by increasing the contrast of the stimuli. (C) 1998 Elsevier Science Ltd. All rights reserved.
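
    The log-unit arithmetic in this abstract is easy to make concrete (an illustrative sketch using the reported thresholds; the luminance values in the example are hypothetical):

```python
# Illustrative contrast arithmetic, not the authors' analysis code.

def weber_contrast(l_target, l_background):
    """Weber contrast (L - Lb) / Lb of a target on a uniform background."""
    return (l_target - l_background) / l_background

def elevation_factor(log_threshold_tms, log_threshold_control):
    """Linear factor by which stimulus contrast must rise to compensate
    for TMS, given both thresholds in log10 units."""
    return 10 ** (log_threshold_tms - log_threshold_control)
```

    With the reported thresholds of 0.88 (control) and 1.67 (TMS at 120 ms SOA) log units, stimuli needed roughly 10^0.79, or about 6.2 times, more contrast under TMS.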

    Identitätskrisen meistern (Mastering Identity Crises)

    With dial-up connections to the Internet, the provider assigns a new IP address on every dial-in. This quickly becomes a problem when such machines need to be reached from the Internet. Special nameserver services for changing IP addresses promise a remedy.
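
    The update mechanism behind such nameserver services can be sketched as a DynDNS2-style HTTP request (the server name, hostname, and credentials below are placeholders; real providers document their own endpoints and response codes):

```python
import base64
import urllib.request

def build_update_request(server, hostname, username, password, ip):
    """Build a DynDNS2-style update request telling the nameserver service
    which IP address the hostname should currently resolve to.

    All arguments are placeholders for illustration; this only constructs
    the request and does not contact any service.
    """
    url = f"https://{server}/nic/update?hostname={hostname}&myip={ip}"
    req = urllib.request.Request(url)
    # DynDNS2-compatible services authenticate via HTTP Basic auth
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req
```

    A client on the dial-up machine would send such a request after each dial-in (via `urllib.request.urlopen`), so the hostname keeps pointing at the freshly assigned address.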

    Visual Information and Compensatory Head Rotations During Postural Stabilisation

    This study investigated how human observers use visual information to stabilise posture. Participants were required to stand as still and stable as possible on a soft foam balance pad while fixating a small target at eye height on a dimly lit lamp. The room was completely darkened so that no other visual information was available. The lamp was placed at a distance of 0.4 m, 1.16 m, 2.33 m, 3.5 m, 4.66 m, or 5.82 m from the observer; no previous study had investigated such a wide range of distances. Head position and orientation were measured at 120 Hz using a Vicon tracking system, with participants wearing a helmet carrying infrared-reflecting markers. Each trial lasted 40 seconds, with 30-second breaks between trials; room lights were switched on during the breaks to prevent complete dark adaptation. Postural stability was quantified as the most frequent sway velocity occurring at the sampling frequency, which proved more robust than alternative measures such as sway trajectory length. In addition, RMS values for lateral and frontal sway were computed. Results showed that postural stability significantly decreased with increasing fixation distance: at 0.4 m, the average sway velocity across 10 participants was 0.85 cm/s, increasing to 1.4 cm/s at 5.82 m. With eyes closed, average sway velocity increased to 1.55 cm/s. To investigate the influence of target distance on fixation behaviour, we analysed the yaw rotation of the head. A positive correlation between head orientation angle and head position in the mid-lateral plane was found, meaning that during lateral postural sway the head makes systematic compensatory rotations about the yaw axis when observers aim to maintain fixation straight ahead. This correlation significantly decreased with increasing fixation distance and reached a plateau at about 2.5 m, as did the decrease of postural stability at larger fixation distances. No correlation between head orientation and head position in the anterior-posterior plane was found. Further experiments, which will also include eye tracking, will investigate how afferent visual information and efferent eye and head movements contribute to human postural stabilisation performance.
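
    The sway measures described above can be sketched as follows (illustrative only, not the authors' analysis code; positions are single-axis head coordinates sampled at the 120 Hz tracking rate):

```python
# Illustrative sway measures; the input is a list of head positions
# along one axis (e.g. lateral), sampled at the Vicon rate of 120 Hz.

FS = 120.0  # sampling frequency, Hz

def rms_sway(positions):
    """Root-mean-square deviation of head position from its mean."""
    mean = sum(positions) / len(positions)
    return (sum((p - mean) ** 2 for p in positions) / len(positions)) ** 0.5

def sway_velocities(positions, fs=FS):
    """Sample-to-sample sway speed, in position units per second."""
    return [abs(b - a) * fs for a, b in zip(positions, positions[1:])]
```

    The "most frequent sway velocity" reported in the study would then be the mode of a histogram over these per-sample velocities, rather than their mean.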

    The impact of gravitoinertial cues on the perception of lateral self-motion

    It is typically assumed that during passive motion in darkness, velocity and traveled distance are estimated from inertial signals. The forces occurring during linear acceleration can be detected by the vestibular and other sensory systems, but these inertial forces are in principle indistinguishable from comparable gravitational forces in tilted orientations. In this study, we used this tilt-translation ambiguity to systematically alter gravitoinertial forces and evaluated the effect on the perception of curvilinear translation. Participants were seated in a completely dark room on the MPI Motion Simulator and used a steering wheel to control lateral motion along an arc. A target was briefly flashed in the darkness, and participants were asked to move to it. A sideways tilt was applied either in the same or in the opposite direction of the lateral movement to attenuate or enhance the gravitoinertial forces. Attenuating gravitoinertial forces did not affect distance estimates, whereas enhancing them resulted in a significant but small decrease in produced distances. This suggests that self-motion perception in the absence of visual information may not be as strongly influenced by gravitoinertial forces as typically assumed; it may instead rely more on non-directional sensory information, such as the noise and vibrations that accompany almost any motion.
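
    The tilt-translation ambiguity underlying this manipulation follows from the physics of the gravitoinertial vector: a lateral acceleration a is indistinguishable from a static sideways tilt of asin(a/g). A minimal sketch (illustrative functions, not the simulator's control code):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def equivalent_tilt_deg(lateral_accel):
    """Tilt angle (degrees) whose gravity component matches a given
    lateral acceleration; for small angles this is approximately a/g."""
    return math.degrees(math.asin(lateral_accel / G))

def net_gi_force(lateral_accel, tilt_deg):
    """Net lateral gravitoinertial force per unit mass when tilting while
    accelerating: tilting with the motion attenuates it, tilting against
    the motion enhances it."""
    return lateral_accel - G * math.sin(math.radians(tilt_deg))
```

    Tilting by exactly the equivalent angle cancels the sensed lateral force, which is the "attenuate" condition of the experiment; a tilt in the opposite direction (negative angle) adds to it.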

    Learning System Dynamics: Transfer of Training in a Helicopter Hover Simulator

    Transfer of training between simulations of an inert and an agile helicopter dynamics was assessed using a quasi-transfer design. The focus of this study was to test the ability of flight-naïve subjects to acquire and transfer the skills required to perform lateral sidestep hover maneuvers in a helicopter simulation. The experiments were performed on the MPI Motion Simulator, which can realize a highly realistic 1:1 motion representation of a simulated helicopter maneuver. The amount of training needed to stabilize the agile and the inert helicopter dynamics did not differ. A clear positive transfer effect was found for the acquired skills from the agile to the inert dynamics, but not from the inert to the agile dynamics.
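
    The inert/agile distinction can be illustrated with a toy first-order lag model (a made-up model and time constants, not the simulator's helicopter dynamics): the same stick input produces a much faster response when the time constant is small (agile) than when it is large (inert).

```python
# Toy illustration of inert vs. agile vehicle dynamics; the first-order
# lag model and its time constants are illustrative assumptions only.

def simulate(tau, inputs, dt=0.01):
    """Forward-Euler integration of tau * dv/dt + v = u.

    tau    -- time constant in seconds (small = agile, large = inert)
    inputs -- stick command u at each time step
    Returns the velocity trace v over time.
    """
    v, trace = 0.0, []
    for u in inputs:
        v += dt * (u - v) / tau  # velocity relaxes toward the command
        trace.append(v)
    return trace
```

    After one second of a unit step input, the agile vehicle (small tau) has nearly reached the commanded velocity while the inert one (large tau) is still far from it, which is why stabilising the two feels like two different skills.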